
Woodbury Transformations for Deep Generative Flows

Neural Information Processing Systems

Normalizing flows are deep generative models that allow efficient likelihood calculation and sampling. The core requirement for this advantage is that they are constructed using functions that can be efficiently inverted and for which the determinant of the function's Jacobian can be efficiently computed. Researchers have introduced various such flow operations, but few of these allow rich interactions among variables without incurring significant computational costs. In this paper, we introduce Woodbury transformations, which achieve efficient invertibility via the Woodbury matrix identity and efficient determinant calculation via Sylvester's determinant identity. In contrast with other operations used in state-of-the-art normalizing flows, Woodbury transformations enable (1) high-dimensional interactions, (2) efficient sampling, and (3) efficient likelihood evaluation. Other similar operations, such as 1x1 convolutions, emerging convolutions, or periodic convolutions, allow at most two of these three advantages. In our experiments on multiple image datasets, we find that Woodbury transformations allow learning of higher-likelihood models than other flow architectures while still enjoying their efficiency advantages.
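The determinant trick the abstract refers to can be illustrated with a small sketch (this is an illustrative reconstruction, not the authors' code; all names here are hypothetical). For a layer of the form W = I + UV with U of shape d×r and V of shape r×d, Sylvester's determinant identity gives det(I_d + UV) = det(I_r + VU), so the log-determinant needed for likelihood evaluation costs roughly O(dr² + r³) instead of O(d³):

```python
import numpy as np

# Low-rank update to the identity: W = I_d + U @ V, with r << d.
rng = np.random.default_rng(0)
d, r = 64, 4
U = 0.1 * rng.standard_normal((d, r))
V = 0.1 * rng.standard_normal((r, d))
x = rng.standard_normal(d)

# Forward pass z = W x, computed without ever forming the d x d matrix W.
z = x + U @ (V @ x)

# Log-abs-determinant via Sylvester's identity: only an r x r determinant.
small = np.eye(r) + V @ U
logdet_fast = np.linalg.slogdet(small)[1]

# Sanity check against the naive O(d^3) computation on the full matrix.
W = np.eye(d) + U @ V
logdet_naive = np.linalg.slogdet(W)[1]
assert np.allclose(logdet_fast, logdet_naive)
```

Because only the r×r matrix is factorized, the cost of the log-determinant no longer depends cubically on the data dimension d.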




Review for NeurIPS paper: Woodbury Transformations for Deep Generative Flows

Neural Information Processing Systems

The paper proposes to parameterize a linear transformation as a low-rank update to an identity matrix, and then use the Woodbury matrix identity to efficiently compute its inverse and the Sylvester determinant identity to efficiently compute its determinant. Some reviewers expressed concerns regarding novelty, which I share. Indeed, the proposed linear flows are a fairly straightforward application of well-known matrix-algebra techniques for inverting and calculating the determinant of low-rank updates. For that reason, I doubt this paper contains much new information for normalizing-flow experts, although it may be useful to a broader machine-learning audience. Having said that, this paper is well-written and well-executed, contains some novel extensions to the basic idea, and is likely the first published version of Woodbury flows with careful experimental comparisons to alternatives.
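The inversion step the review describes can likewise be sketched in a few lines (again an assumed illustration, not the paper's implementation). The Woodbury identity specialized to W = I + UV reads (I + UV)⁻¹ = I − U(I_r + VU)⁻¹V, so inverting the layer for sampling only requires solving an r×r linear system:

```python
import numpy as np

# Same low-rank parameterization as before: W = I_d + U @ V.
rng = np.random.default_rng(1)
d, r = 64, 4
U = 0.1 * rng.standard_normal((d, r))
V = 0.1 * rng.standard_normal((r, d))
z = rng.standard_normal(d)

# Woodbury inverse applied to z: x = W^{-1} z.
# Only the r x r system (I_r + V U) is solved, never the d x d one.
small = np.eye(r) + V @ U
x = z - U @ np.linalg.solve(small, V @ z)

# Round trip: applying W to the recovered x should reproduce z.
z_back = x + U @ (V @ x)
assert np.allclose(z_back, z)
```

This is exactly why the transformation supports both efficient sampling (inverse) and efficient likelihood evaluation (determinant) at the same time, which the paper contrasts with alternatives like emerging convolutions.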



Woodbury Transformations for Deep Generative Flows

Lu, You, Huang, Bert

arXiv.org Machine Learning

Normalizing flows are deep generative models that allow efficient likelihood calculation and sampling. The core requirement for this advantage is that they are constructed using functions that can be efficiently inverted and for which the determinant of the function's Jacobian can be efficiently computed. Researchers have introduced various such flow operations, but few of these allow rich interactions among variables without incurring significant computational costs. In this paper, we introduce Woodbury transformations, which achieve efficient invertibility via the Woodbury matrix identity and efficient determinant calculation via Sylvester's determinant identity. In contrast with other operations used in state-of-the-art normalizing flows, Woodbury transformations enable (1) high-dimensional interactions, (2) efficient sampling, and (3) efficient likelihood evaluation. Other similar operations, such as 1x1 convolutions, emerging convolutions, or periodic convolutions, allow at most two of these three advantages. In our experiments on multiple image datasets, we find that Woodbury transformations allow learning of higher-likelihood models than other flow architectures while still enjoying their efficiency advantages.